Low Power Design Methodology
Due to the widespread use of portable electronic devices and the evolution of microelectronic technology, power dissipation has become a critical parameter in low-power VLSI circuit design. In emerging VLSI technology, growing circuit complexity and higher speeds imply a significant increase in power consumption. In low-power CMOS VLSI circuits, energy is dissipated by the charging and discharging of internal node capacitances driven by transition activity, which is one of the major contributors to dynamic power dissipation. Reducing power and area while improving speed requires optimization at every level of the design procedure. Here, various design methodologies for achieving these low-power design goals are discussed.
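The dynamic power dissipation mentioned above follows the standard CMOS switching relation P_dyn = α·C·V_dd²·f. A minimal sketch of this relation; all component values below are hypothetical, chosen only for illustration:

```python
# Illustrative estimate of dynamic (switching) power for one CMOS node:
#   P_dyn = alpha * C_load * Vdd^2 * f
# alpha: switching activity factor, C_load: node capacitance,
# Vdd: supply voltage, f: clock frequency. All values are hypothetical.

def dynamic_power(alpha, c_load, vdd, freq):
    """Dynamic power in watts for a single switching node."""
    return alpha * c_load * vdd ** 2 * freq

# Example: 20% activity, 10 fF node, 1.0 V supply, 1 GHz clock
p = dynamic_power(0.2, 10e-15, 1.0, 1e9)

# Voltage scaling: halving Vdd cuts dynamic power by a factor of four,
# which is why supply-voltage reduction dominates low-power methodology.
p_scaled = dynamic_power(0.2, 10e-15, 0.5, 1e9)
```

The quadratic dependence on V_dd is the reason voltage scaling is usually the first lever pulled in low-power design flows.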
Differentially Private Reward Estimation with Preference Feedback
Learning from preference-based feedback has recently gained considerable
traction as a promising approach to align generative models with human
interests. Instead of relying on numerical rewards, the generative models are
trained using reinforcement learning with human feedback (RLHF). These
approaches first solicit feedback from human labelers typically in the form of
pairwise comparisons between two possible actions, then estimate a reward model
using these comparisons, and finally employ a policy based on the estimated
reward model. An adversarial attack at any step of this pipeline might
reveal private and sensitive information about the human labelers. In this
work, we adopt the notion of label differential privacy (DP) and focus on the
problem of reward estimation from preference-based feedback while protecting
the privacy of each individual labeler. Specifically, we consider the
parametric Bradley-Terry-Luce (BTL) model for such pairwise comparison
feedback, involving a latent reward parameter. Within a standard minimax
estimation framework, we provide tight upper and lower bounds on the error in
estimating this parameter under both local and central models of DP, and
quantify, for a given privacy budget and number of samples, the additional
estimation cost incurred to ensure label-DP under the local model and under
the weaker central model. We perform simulations on synthetic data that
corroborate these theoretical results.
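As an illustration of the local model discussed above, comparison labels can be privatized with randomized response and then debiased before fitting the reward parameter. The sketch below uses synthetic BTL data; it is not the paper's estimator, and the dimensions, privacy budget, and step sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: d-dim feature differences x_i between action pairs,
# a latent reward parameter theta_star, and BTL comparison labels
# y_i ~ Bernoulli(sigmoid(x_i @ theta_star)). All values are illustrative.
d, n, eps = 3, 20000, 1.0
theta_star = np.array([1.0, -0.5, 0.25])
X = rng.normal(size=(n, d))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
y = rng.random(n) < sigmoid(X @ theta_star)

# Local label-DP via randomized response: keep each label with
# probability e^eps / (1 + e^eps), flip it otherwise.
p_keep = np.exp(eps) / (1.0 + np.exp(eps))
keep = rng.random(n) < p_keep
y_priv = np.where(keep, y, ~y)

# Debias: E[y_priv | x] = p_keep * p + (1 - p_keep) * (1 - p), so invert
# the randomized-response channel to get unbiased targets for the
# comparison probabilities, then fit theta by logistic-loss gradient descent.
y_tilde = (y_priv.astype(float) - (1 - p_keep)) / (2 * p_keep - 1)

theta = np.zeros(d)
for _ in range(500):
    grad = X.T @ (sigmoid(X @ theta) - y_tilde) / n
    theta -= 0.5 * grad

err = np.linalg.norm(theta - theta_star)
```

The debiasing step keeps the estimator consistent, but the inflated target variance is exactly where the extra estimation cost of the local model shows up.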
Learning for Interval Prediction of Electricity Demand: A Cluster-based Bootstrapping Approach
Accurate predictions of electricity demands are necessary for managing
operations in a small aggregation load setting like a Microgrid. Due to low
aggregation, the electricity demands can be highly stochastic and point
estimates would lead to inflated errors. Interval estimation in this scenario
would provide a range of values within which future values might lie and
would help quantify the error around the point estimates. This paper introduces a
residual bootstrap algorithm to generate interval estimates of day-ahead
electricity demand. A machine learning algorithm is used to obtain the point
estimates of electricity demand and respective residuals on the training set.
The obtained residuals are stored in memory and the memory is further
partitioned. Days with similar demand patterns are grouped in clusters using an
unsupervised learning algorithm and these clusters are used to partition the
memory. The point estimates for the test day are used to find the closest
cluster of similar days, and the residuals are bootstrapped from the chosen
cluster. This algorithm is evaluated on real electricity demand data from
EULR (End Use Load Research) and is compared to other bootstrapping methods
for varying confidence intervals.
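A minimal sketch of the cluster-based residual bootstrap described above, on synthetic data. The clustering method (a tiny k-means), cluster count, and forecast horizon are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: daily demand profiles (days x 24 hours),
# hypothetical point forecasts, and their residuals on a training set.
n_days, horizon, k = 200, 24, 4
base = rng.uniform(5, 20, size=(n_days, 1))            # per-day demand level
profiles = base + rng.normal(0, 1.0, size=(n_days, horizon))
point_fc = profiles + rng.normal(0, 0.5, size=(n_days, horizon))
residuals = profiles - point_fc

# Tiny k-means on the forecast profiles groups days with similar patterns;
# the residual "memory" is partitioned by these cluster labels.
centers = point_fc[rng.choice(n_days, k, replace=False)]
for _ in range(20):
    labels = np.argmin(((point_fc[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([point_fc[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])

def interval(test_fc, alpha=0.1, n_boot=1000):
    """Bootstrap residuals from the nearest cluster to form a
    (1 - alpha) prediction interval around a test-day point forecast."""
    j = np.argmin(((centers - test_fc) ** 2).sum(-1))
    pool = residuals[labels == j].ravel()
    if pool.size == 0:                  # fallback: use the whole memory
        pool = residuals.ravel()
    sims = test_fc + rng.choice(pool, size=(n_boot, horizon))
    return (np.quantile(sims, alpha / 2, axis=0),
            np.quantile(sims, 1 - alpha / 2, axis=0))

lo, hi = interval(point_fc[0])
```

Restricting the bootstrap pool to the nearest cluster is what lets the interval width adapt to how volatile similar days have historically been.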
PU Learning for Matrix Completion
In this paper, we consider the matrix completion problem when the
observations are one-bit measurements of some underlying matrix M, and in
particular the observed samples consist only of ones and no zeros. This problem
is motivated by modern applications such as recommender systems and social
networks where only "likes" or "friendships" are observed. The problem of
learning from only positive and unlabeled examples, called PU
(positive-unlabeled) learning, has been studied in the context of binary
classification. We consider the PU matrix completion problem, where an
underlying real-valued matrix M is first quantized to generate one-bit
observations and then a subset of positive entries is revealed. Under the
assumption that M has bounded nuclear norm, we provide recovery guarantees for
two different observation models: 1) M parameterizes a distribution that
generates a binary matrix, 2) M is thresholded to obtain a binary matrix. For
the first case, we propose a "shifted matrix completion" method that recovers M
using only a subset of indices corresponding to ones, while for the second
case, we propose a "biased matrix completion" method that recovers the
(thresholded) binary matrix. Both methods yield strong error bounds: if M is
n by n, the Frobenius error is bounded as O(1/((1-rho)n)), where 1-rho denotes
the fraction of ones observed. This implies a sample complexity of O(n\log n)
ones to achieve a small error, when M is dense and n is large. We extend our
methods and guarantees to the inductive matrix completion problem, where rows
and columns of M have associated features. We provide efficient and scalable
optimization procedures for both the methods and demonstrate the effectiveness
of the proposed methods for link prediction (on real-world networks consisting
of over 2 million nodes and 90 million links) and semi-supervised clustering
tasks.
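The "biased matrix completion" idea can be illustrated with a weighted low-rank factorization: every unobserved entry is treated as a zero but down-weighted, since some unobserved entries are hidden ones. A sketch on synthetic data; the alternating-least-squares solver, weights, and rank below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic PU setup: a low-rank score matrix M is thresholded to a
# binary matrix B, and only about half of the ones is revealed.
n, r = 60, 3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))
B = (M > 0).astype(float)
mask = (B == 1) & (rng.random((n, n)) < 0.5)    # observed ones

# Biased completion: fit a rank-r factorization U @ V.T to a target that
# is 1 on observed entries and 0 elsewhere, down-weighting the assumed
# zeros (weight w < 1) via weighted alternating least squares.
w = 0.3
W = np.where(mask, 1.0, w)
Y = mask.astype(float)
U = 0.1 * rng.normal(size=(n, r))
V = 0.1 * rng.normal(size=(n, r))

def als_rows(U, V, Y, W, lam=0.1):
    """Solve each row of U against fixed V under per-entry weights."""
    for i in range(Y.shape[0]):
        A = V.T * W[i]                  # r x n, columns scaled by weights
        U[i] = np.linalg.solve(A @ V + lam * np.eye(V.shape[1]), A @ Y[i])
    return U

for _ in range(10):
    U = als_rows(U, V, Y, W)
    V = als_rows(V, U, Y.T, W.T)

scores = U @ V.T
B_hat = (scores > 0.5).astype(float)
acc = (B_hat == B).mean()
```

The down-weighting is the whole trick: with full weight on the assumed zeros, the hidden ones would be pulled toward zero; shrinking their weight lets the low-rank structure fill them back in.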